Shane McQuarrie
Math 510
18 December 2015
The Lorenz system is one of the fundamental models for chaos. We are interested in examining a modification that extends the system to five dimensions. On the left is the original system, and on the right, the modified version.
$$ \begin{align} \nonumber &\dot{X} = \sigma \left(Y-X\right) && \dot{X} = \sigma \left(Y-X\right) - pU\\ \nonumber &\dot{Y} = rX-XZ-Y && \dot{Y} = rX-XZ-Y\\ \nonumber &\dot{Z} = XY-bZ && \dot{Z} = XY-bZ\\ \nonumber & && \dot{U} = -XV + pX - \sigma U\\ \nonumber & && \dot{V} = XU - qV \end{align} $$where $p = \frac{\sqrt{\sigma}b^{3/2}}{\epsilon\pi^2 2^{11/4}}$ and $q = (4-b)\sigma$.
We begin by importing the necessary modules.
%matplotlib inline
from matplotlib import rcParams, pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.integrate import odeint
from scipy.linalg import norm
import numpy as np
rcParams['figure.figsize'] = (16,10)
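As a quick sanity check on the definitions above (a standalone sketch, not part of the later analysis), we can evaluate the coupling constants $p$ and $q$ for the standard parameters used throughout this notebook, $b = 8/3$, $\sigma = 10$, and $\epsilon = 0.05$:

```python
import numpy as np

# Coupling constants from the definitions above, evaluated at the
# standard parameters b = 8/3, sigma = 10, epsilon = 0.05.
b, sig, eps = 8/3., 10., .05
p = (np.sqrt(sig) * b**1.5) / (eps * np.pi**2 * 2**(11/4.))
q = (4 - b) * sig

print(p, q)   # p is roughly 4.15 and q = 40/3
```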
First we write a function for solving the original Lorenz equations. We may specify the system parameters $b$, $r$, and $\sigma$, as well as the initial conditions, the length of the time interval, and the tolerances for the ODE solver.
For consistency, we'll use the same initial conditions repeatedly.
# Set initial conditions (changing this line will change the rest of the notebook).
initial = np.ones(5)*10
def solve_original(b=8/3., r=28., sig=10.,
x0=initial[:3], t=20, atol=1e-14, rtol=1e-12):
"""Generate a solution of the original Lorenz system."""
# Initialize remaining parameters.
time = np.linspace(0, t+1, t*100)
# Define the system.
def original(data, T):
"""Original Lorenz system."""
X,Y,Z = data
Xdot = sig*(Y-X)
Ydot = r*X - X*Z - Y
Zdot = X*Y - b*Z
return np.array([Xdot, Ydot, Zdot])
# Solve and return the solution.
return odeint(original, x0, time, atol=atol, rtol=rtol).T
To visualize the solutions of the system we write a function that plots a solution set in three dimensions. Unfortunately, the IPython Notebook does not handle animations well, so the images in this notebook are static.
def plot_single(sol, color='b', title=None):
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(sol[0], sol[1], sol[2], color=color)
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z')
ax.set_xlim3d([min(sol[0]), max(sol[0])])
ax.set_ylim3d([min(sol[1]), max(sol[1])])
ax.set_zlim3d([min(sol[2]), max(sol[2])])
plt.title(title)
plt.show()
plot_single(solve_original(), title="Original Lorenz System")
We similarly define a function to solve the modified Lorenz system. Note the similarity between the original and modified system solutions.
def solve_modified(b=8/3., r=28., sig=10., eps=.05,
x0=initial, t=20, atol=1e-14, rtol=1e-12):
"""Generate a solution of the modified Lorenz system."""
# Initialize remaining parameters.
q = (4 - b)*sig
p = (np.sqrt(sig)*b**(1.5))/(eps*np.pi**2*2**(11./4.))
time = np.linspace(0, t+1, t*100)
# Define the system.
def modified(data, T):
"""Modified Lorenz system."""
X,Y,Z,U,V = data
Xdot = sig*(Y - X) - p*U
Ydot = r*X - X*Z - Y
Zdot = X*Y - b*Z
Udot = -1.*X*V + p*X - sig*U
Vdot = X*U - q*V
return np.array([Xdot, Ydot, Zdot, Udot, Vdot])
# Solve and return the solution.
return odeint(modified, x0, time, atol=atol, rtol=rtol).T
def plot_double(sol1, sol2, color1='b', color2='r', title=None):
assert sol1.shape[1] == sol2.shape[1], "datasets not aligned"
fig = plt.figure()
ax = fig.gca(projection='3d')
curve1, = ax.plot(sol1[0], sol1[1], sol1[2], color=color1)
curve2, = ax.plot(sol2[0], sol2[1], sol2[2], color=color2)
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z')
ax.set_xlim3d([min([min(sol1[0]), min(sol2[0])]),
max([max(sol1[0]), max(sol2[0])])])
ax.set_ylim3d([min([min(sol1[1]), min(sol2[1])]),
max([max(sol1[1]), max(sol2[1])])])
ax.set_zlim3d([min([min(sol1[2]), min(sol2[2])]),
max([max(sol1[2]), max(sol2[2])])])
plt.title(title)
plt.show()
s1 = solve_original()
s2 = solve_modified()
plot_single(s1, title="Original System", color='b')
plot_single(s2, title="Modified System", color='r')
plot_double(s1, s2, color1='b',color2='r',
title="Overlay of both solutions. The Blue is the Original System and the Red is the Modified System.")
The two solutions tend to be qualitatively similar in $\mathbb{R}^3$.
We analyze the stability of each of these systems visually (the rigorous stability of the ODE solver is a little beyond the scope of this project).
We will examine three types of perturbations: small changes in the initial condition, different tolerances in the ODE solver, and small changes in some of the system parameters.
Since the Lorenz system is highly chaotic, we expect that small changes in the initial conditions will result in significant changes in the solutions, provided that we allow the system to evolve for long enough.
To measure this effect, we let the systems evolve for 3,000 solver steps (30 time units, at 100 steps per unit) and check the accumulated differences at a series of checkpoints.
# Create a small perturbation and add it to the initial condition.
perturbation = np.random.randn(5)*.001
perturbed = initial + perturbation
# Solve the original and modified systems, with the original and perturbed initial conditions.
initial_org1 = solve_original(x0=initial[:3], t=30)
initial_org2 = solve_original(x0=perturbed[:3], t=30)
initial_mod1 = solve_modified(x0=initial, t=30)
initial_mod2 = solve_modified(x0=perturbed, t=30)
initial_diff1 = np.abs(np.mean(initial_org1-initial_org2, axis=0))
initial_diff2 = np.abs(np.mean(initial_mod1-initial_mod2, axis=0))
# Plot results.
plot_double(initial_org1, initial_org2, color1='b', color2='g',
title='Original System with Perturbed Initial Conditions')
plot_double(initial_mod1, initial_mod2, color1='r', color2='g',
title='Modified System with Perturbed Initial Conditions')
At first, the solutions appear to be identical, but it does not take long for them to separate, which is why multiple colors are visible. Most interesting, however, is that the modified system appears to be slightly less affected than the original system as time increases.
def analyze_difference(diff1, diff2, iters, verbose=True):
"""Print and plot statistics on the arrays `diff1` and `diff2` with increments `iters`."""
def analyze(diff):
"""Print the maximum, minimum, and average values in an array of differences."""
ma, me, no = np.max(diff), np.mean(diff), norm(diff)
if verbose is True:
print("""After {} steps:
Maximum difference: {}
Average difference: {}
2-Norm of difference: {}
""".format(len(diff), ma, me, no))
return ma, me, no
org_maxs, org_means, org_norms = [], [], []
mod_maxs, mod_means, mod_norms = [], [], []
if verbose is True:
print("Original System --------------------------------------------------")
for stop in iters:
x1, y1, z1 = analyze(diff1[:stop])
org_maxs.append(x1); org_means.append(y1); org_norms.append(z1)
if verbose is True:
print("Modified System --------------------------------------------------")
for stop in iters:
x2, y2, z2 = analyze(diff2[:stop])
mod_maxs.append(x2); mod_means.append(y2); mod_norms.append(z2)
plt.subplot(121)
plt.plot(org_maxs, linewidth=2, label="Original: Max Difference")
plt.plot(org_means, linewidth=1, label="Original: Average Difference")
plt.plot(mod_maxs, linewidth=2, label="Modified: Max Difference")
plt.plot(mod_means, linewidth=1, label="Modified: Mean Difference", color='orange')
plt.xlabel("Time Steps"); plt.ylabel("Difference")
plt.xticks(np.arange(len(iters)), iters)
plt.legend(loc="upper left")
plt.subplot(122)
plt.plot(org_norms, linewidth=3, label="Original: 2 Norm of Difference")
plt.plot(mod_norms, linewidth=3, label="Modified: 2 Norm of Difference", color="red")
plt.xlabel("Time Steps"); plt.ylabel("Difference")
plt.xticks(np.arange(len(iters)), iters)
plt.legend(loc="upper left")
plt.show()
analyze_difference(initial_diff1, initial_diff2, (500, 1000, 1500, 2000, 2500, 3000))
Thus for our particular set of initial conditions, the modified system appears to be somewhat more stable than the original system with respect to perturbations in the initial conditions.
We can also solve the system from the same initial point, but with different solver tolerances. To see any differences, we need to let the system evolve for a little longer than before.
# Solve the original and modified systems, with different solver tolerances.
tol_org1 = solve_original(x0=initial[:3], atol=1e-14, rtol=1e-12, t=50)
tol_org2 = solve_original(x0=initial[:3], atol=1e-15, rtol=1e-13, t=50)
tol_mod1 = solve_modified(x0=initial, atol=1e-14, rtol=1e-12, t=50)
tol_mod2 = solve_modified(x0=initial, atol=1e-15, rtol=1e-13, t=50)
tol_diff1 = np.abs(np.mean(tol_org1-tol_org2, axis=0))
tol_diff2 = np.abs(np.mean(tol_mod1-tol_mod2, axis=0))
# Plot results.
plot_double(tol_org1, tol_org2, color1='b', color2='g', title='Original System with Different Solver Tolerances')
plot_double(tol_mod1, tol_mod2, color1='r', color2='g', title='Modified System with Different Solver Tolerances')
analyze_difference(tol_diff1, tol_diff2, (1000, 2000, 3000, 4000, 5000))
Once again, it appears that the modified system is a little more stable than the original system (at least for these initial conditions). However, this result must be taken with a grain of salt; it likely says more about the numerical behavior of the ODE solver scipy.integrate.odeint() than about the instabilities of the systems themselves.
The parameters $b=\frac{8}{3}, \sigma=10, r=28$ give rise to the standard butterfly-like manifold that we have seen in all prior solutions of both the original and modified Lorenz systems. Changing one of the parameters slightly should result in a change in the solutions, as with the other perturbations we have used so far. We choose to vary $r$ from $27.9$ to $28.1$.
# Solve the original and modified systems with slightly different values of r.
param_org1 = solve_original(x0=initial[:3], r=27.9)
param_org2 = solve_original(x0=initial[:3], r=28.1)
param_mod1 = solve_modified(x0=initial, r=27.9)
param_mod2 = solve_modified(x0=initial, r=28.1)
param_diff1 = np.abs(np.mean(param_org1-param_org2, axis=0))
param_diff2 = np.abs(np.mean(param_mod1-param_mod2, axis=0))
# Plot results.
plot_double(param_org1, param_org2, color1='b', color2='g', title='Original System with Different Parameters')
plot_double(param_mod1, param_mod2, color1='r', color2='g', title='Modified System with Different Parameters')
analyze_difference(param_diff1, param_diff2, (500, 1000, 1500, 2000))
Once again, it looks like the modified system does a little better than the original system.
What can explain these differences? Part of it has to do with the way the ODE solver scipy.integrate.odeint() works, as already mentioned. The main factor, however, is the difference in dimension. In fact, upon closer inspection, we see how the last two dimensions (corresponding to the variables $U$ and $V$) might be affecting the analysis given above.
def examine(diff):
for dim, dimension_array in zip(['X','Y','Z','U','V'], diff):
print("\t2 norm of {} dimension of solutions:".format(dim), norm(dimension_array))
print("Perturbed initial conditions")
examine(np.abs(initial_mod1 - initial_mod2))
print("Different solver tolerances")
examine(np.abs(tol_mod1 - tol_mod2))
print("Perturbed system parameters")
examine(np.abs(param_mod1 - param_mod2))
In every case, the 2-norms of the $U$ and $V$ dimensions of the solutions are significantly smaller than the norms of the $X$, $Y$, and $Z$ dimensions. Before, when we took the mean of the array of absolute differences, the 4th and 5th dimensions dragged the average down. This is problematic: because of the nondimensional nature of the system, the fact that the final two dimensions have smaller norms is not actually very significant, so those numbers should not factor heavily into our results.
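As a toy illustration of this skew (the numbers here are hypothetical, chosen only to mimic large $X$, $Y$, $Z$ differences and tiny $U$, $V$ differences), two near-zero trailing dimensions pull the five-dimensional mean well below the three-dimensional one:

```python
import numpy as np

# Hypothetical per-dimension differences: X, Y, Z are O(1); U and V are tiny.
diff = np.array([1.0, 1.2, 0.8, 0.01, 0.02])
print(np.mean(diff))       # mean over all five dimensions (pulled down)
print(np.mean(diff[:3]))   # mean over the first three dimensions only
```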
We fix this by repeating some of our previous steps, but now only using the first three dimensions of the solutions to the modified system.
# Reset the difference vectors by only averaging the first three dimensions.
initial_diff2 = np.abs(np.mean((initial_mod1-initial_mod2)[:3], axis=0))
tol_diff2 = np.abs(np.mean((tol_mod1-tol_mod2)[:3], axis=0))
param_diff2 = np.abs(np.mean((param_mod1-param_mod2)[:3], axis=0))
# Recreate the plots from the previous three subsections.
plt.suptitle("Perturbations in Initial Conditions", fontsize=20)
analyze_difference(initial_diff1, initial_diff2, (500, 1000, 1500, 2000, 2500, 3000), False)
plt.suptitle("Different Solver Tolerances", fontsize=20)
analyze_difference(tol_diff1, tol_diff2, (1000, 2000, 3000, 4000, 5000), False)
plt.suptitle("Perturbed System Parameters", fontsize=20)
analyze_difference(param_diff1, param_diff2, (500, 1000, 1500, 2000), False)
These new plots provide a different perspective: the 'stability' of the modified system is not actually much better than that of the original system. It only seems more stable with respect to changes in the solver tolerances, which does not tell us much about the nature of the system itself. The original system actually seems to do a little better than the modified system with respect to perturbations in the initial conditions and system parameters.
From this analysis it is fair to conclude that the modified Lorenz system is about as stable as the original system. That is, they are not significantly different. The 4th and 5th dimensions of the modified system seem to be somewhat well-behaved, but in $\mathbb{R}^3$ it behaves about as well (or rather, about as poorly) as the original system.
Recall that the relative condition number $\kappa$ of a differentiable mapping $f: X \rightarrow Y$ between normed vector spaces ($f$ is referred to as the "problem") is given by
$$\kappa(\bf{x}) = \frac{\|\bf{J}(\bf{x})\|}{\|\bf{f}(\bf{x})\| / \|\bf{x}\|}$$(we will continue using the 2-norm for the remainder of this project).
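As a small worked example of this definition (unrelated to the Lorenz systems, with an example map chosen purely for illustration), consider $f(x,y) = (x+y,\, xy)$, which has Jacobian $\left(\begin{smallmatrix}1 & 1\\ y & x\end{smallmatrix}\right)$:

```python
import numpy as np
from scipy.linalg import norm

# Illustrative condition-number computation for f(x, y) = (x + y, x*y).
def f(x, y):
    return np.array([x + y, x*y])

def J(x, y):
    return np.array([[1., 1.], [y, x]])

# Evaluate kappa = ||J(x)|| / (||f(x)|| / ||x||) at a sample point.
x, y = 2., 3.
kappa = norm(J(x, y)) / (norm(f(x, y)) / norm(np.array([x, y])))
print(kappa)
```

Here `norm` on a matrix gives the Frobenius norm by default, matching the way norms of Jacobians are computed later in this notebook.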
For our two Lorenz systems, there are a few different conditioning problems that we can consider. The first is the problem of actually calculating the derivatives $\dot{X}$, $\dot{Y}$, $\dot{Z}$, $\dot{U}$ and $\dot{V}$ at a given point $(x,y,z,u,v)$. For the original system, we only use the first three dimensions.
For this problem, the Jacobian of the original system is given by $$J_{org}(x,y,z) = \left( \begin{array}{ccc} -\sigma & \sigma & 0 \\ r-Z & -1 & -X \\ Y & X & -b \end{array} \right)$$
and the Jacobian of the modified system is given by $$J_{mod}(x,y,z, u, v) = \left( \begin{array}{ccccc} -\sigma & \sigma & 0 & -p & 0\\ r-Z & -1 & -X & 0 & 0\\ Y & X & -b & 0 & 0\\ p-V & 0 & 0 & -\sigma & -X\\ U & 0 & 0 & X & -q\end{array} \right)$$
We will 'sample' the condition numbers at several points to try to estimate an upper bound on the condition number. In this case, we're actually estimating
$$\underset{\bf{x}\in\mathbb{R}^3}{\text{sup }}\kappa(\bf{x})$$
# Coupling constant p with the standard system parameters.
p = (np.sqrt(10)*(8/3.)**(1.5))/(.05*np.pi**2*2**(11./4.))
# Define functions for evaluating the condition number of this problem.
def val_org(x, y, z):
"""Calculate the value of the original Lorenz system at a point (with standard system parameters)."""
return np.array([10*(y-x), 28*x - x*z - y, x*y - 8*z/3.])
def val_mod(x, y, z, u, v):
"""Calculate the value of the modified Lorenz system at a point (with standard system parameters)."""
return np.array([10*(y-x) - p*u, 28*x - x*z - y, x*y - 8*z/3., -x*v + p*x - 10*u, x*u - 40*v/3.])
def Jac_val_org(x, y, z):
"""Return the Jacobian of the original Lorenz system at a point (with standard system parameters)."""
return np.array([[ -10, 10, 0],
[28-z, -1, -x],
[ y, x, -8/3.]])
def Jac_val_mod(x, y, z, u, v):
"""Return the Jacobian of the modified Lorenz system at a point (with standard system parameters)."""
return np.array([[ -10, 10, 0, -p, 0],
[28-z, -1, -x, 0, 0],
[ y, x, -8/3., 0, 0],
[ p-v, 0, 0, -10, -x],
[ u, 0, 0, x, -40/3.]])
def cond_val_org(x,y,z):
"""Calculate the condition number of calculating the derivatives of x, y, and z at the given point."""
return norm(Jac_val_org(x,y,z))/(norm(val_org(x,y,z))/norm(np.array([x,y,z])))
def cond_val_mod(x, y, z, u, v):
"""Calculate the condition number of calculating the derivatives of x, y, and z at the given point."""
return norm(Jac_val_mod(x,y,z,u,v))/(norm(val_mod(x,y,z,u,v))/norm(np.array([x,y,z,u,v])))
# Estimate the condition number with 50,000 random points.
max_org_cond, max_mod_cond = 0, 0
for i in range(50000):
point = np.random.random(5)*np.random.randint(-50,50,5)
c1 = cond_val_org(point[0], point[1], point[2])
c2 = cond_val_mod(point[0], point[1], point[2], point[3], point[4])
# NumPy yields inf/nan (not a ZeroDivisionError) when a norm vanishes; skip those points.
if not (np.isfinite(c1) and np.isfinite(c2)):
continue
max_org_cond = max(max_org_cond, c1)
max_mod_cond = max(max_mod_cond, c2)
print("Relative Condition Number for Original System: {}".format(max_org_cond),
"Relative Condition Number for Modified System: {}".format(max_mod_cond), sep='\n')
It appears that calculating the derivatives in the modified system is actually a little better conditioned than calculating the derivatives in the original system. However, both condition numbers are fairly low, suggesting that the ODE solver does not perform significantly worse than usual. If the numbers were very high, we would have to seriously question the analysis from the previous section.
To conclude, we estimate the condition number of another problem: that of finding the fixed points of the system.
Recall that a fixed point of a system $\bf{x}' = \bf{f}(\bf{x})$ is a point $\bf{\bar{x}}$ where all of the derivatives vanish. That is, we seek $\bf{\bar{x}}$ such that $\bf{f}(\bf{\bar{x}}) = \bf{0}$. Two of the three fixed points of both the original and modified systems in $\mathbb{R}^3$ are easily seen in the solution plots shown so far as the attracting points on the solution manifold.
In our case, finding the fixed points amounts to solving the nonlinear systems
$$\begin{align} \nonumber &0 = \sigma \left(Y-X\right) && 0 = \sigma \left(Y-X\right) - pU\\ \nonumber &0 = rX-XZ-Y && 0 = rX-XZ-Y\\ \nonumber &0 = XY-bZ && 0 = XY-bZ\\ \nonumber & && 0 = -XV + pX - \sigma U\\ \nonumber & && 0 = XU - qV \end{align}$$which can be reformulated as
$$\left( \begin{array}{ccc} -\sigma & \sigma & 0 \\ r & -1 & 0 \\ 0 & 0 & -b \end{array} \right) \left( \begin{array}{c}X\\Y\\Z\end{array}\right) + \left( \begin{array}{c}0\\-XZ\\XY\end{array}\right) = \left( \begin{array}{c}0\\0\\0\end{array}\right)$$for the original system and
$$\left( \begin{array}{ccccc} -\sigma & \sigma & 0 & -p & 0\\ r & -1 & 0 & 0 & 0\\ 0 & 0 & -b & 0 & 0\\ p & 0 & 0 & -\sigma & 0\\ 0 & 0 & 0 & 0 & -q\end{array} \right) \left( \begin{array}{c}X\\Y\\Z\\U\\V\end{array}\right) + \left( \begin{array}{c}0\\-XZ\\XY\\-XV\\XU\end{array}\right) = \left( \begin{array}{c}0\\0\\0\\0\\0\end{array}\right)$$for the modified system.
This is an awful problem to solve numerically! However, it is possible to get an explicit algebraic formula for the fixed points of each system. For the original system, we have the following:
$$0 = \sigma(Y - X) \Longrightarrow X = Y\\ 0 = XY-bZ \Longrightarrow Z = \frac{1}{b}X^2\\ 0 = rX-XZ-Y \Longrightarrow 0 = rX - \frac{1}{b}X^3 - X \Longrightarrow X = 0, \pm\sqrt{b(r-1)}$$Then the three fixed points for the original system are $(0,0,0)$ and $(\pm\sqrt{b(r-1)}, \pm\sqrt{b(r-1)}, r-1)$, as long as $r>1$.
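These fixed points are easy to verify numerically (a quick standalone sketch using the standard parameters; each derivative should vanish at every fixed point):

```python
import numpy as np

# Standard system parameters.
b, r, sig = 8/3., 28., 10.

def original_rhs(X, Y, Z):
    """Right-hand side of the original Lorenz system."""
    return np.array([sig*(Y - X), r*X - X*Z - Y, X*Y - b*Z])

# Each of the three derived fixed points should make the derivatives zero.
root = np.sqrt(b*(r - 1))
for X, Y, Z in [(0, 0, 0), (root, root, r - 1), (-root, -root, r - 1)]:
    print(original_rhs(X, Y, Z))   # numerically zero at each fixed point
```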
The explicit formula for the fixed points of the modified system is not nearly as clean. We do not show the complete derivation here, but a process similar to the one given above yields the following as the fixed points of the modified system:
$$X = \pm\sqrt{\frac{-\frac{p^2}{b} + \sigma\left(\frac{r-1}{q} - \frac{\sigma}{b}\right) \mp \sqrt{\left(\frac{p^2}{b} - \sigma(\frac{r-1}{q} - \frac{\sigma}{b})\right)^2 + \frac{4\sigma}{bq}\left(\sigma^2(r-1)-p^2\right)}}{\frac{2\sigma}{bq}}}\\ \\ \\ Y = \frac{brX}{X^2 + b}\\ Z = \frac{rX^2}{X^2 + b}\\ U = \frac{pqX}{X^2 + \sigma q}\\ V = \frac{pX^2}{X^2 + \sigma q}$$Where $p = \frac{\sqrt{\sigma}b^{3/2}}{\epsilon\pi^2 2^{11/4}}$ and $q = (4-b)\sigma$ as usual.
For this problem, the inputs are the system parameters: $b$ and $r$ for the original system (the fixed points do not depend at all on $\sigma$), plus $\sigma$ and $\epsilon$ for the modified system. We restrict the problem to computing the fixed point in the octant $\{(x,y,z) : x\ge0,y\ge0,z\ge0 \}$. For the original system, this is the function
$$f_{org}(b, r) = \left(\begin{array}{c}\sqrt{b(r-1)}\\ \sqrt{b(r-1)}\\r-1\end{array}\right)$$which has Jacobian $$J_{org}(b, r) = \left(\begin{array}{cc} \frac{r-1}{2\sqrt{b(r-1)}} & \frac{b}{2\sqrt{b(r-1)}}\\ \frac{r-1}{2\sqrt{b(r-1)}} & \frac{b}{2\sqrt{b(r-1)}}\\ 0 & 1\end{array}\right)$$
For the modified system, however, this is a much uglier problem. The function is given by
$$f_{mod}(b, r, \sigma, \epsilon) = \left(\begin{array}{c}X\\ \frac{brX}{X^2 + b}\\ \frac{rX^2}{X^2 + b}\\ \frac{pqX}{X^2 + \sigma q}\\ \frac{pX^2}{X^2 + \sigma q}\end{array}\right)$$where
$$X = \sqrt{\frac{\sigma\left(\frac{r-1}{q} - \frac{\sigma}{b}\right) -\frac{p^2}{b} - \sqrt{\left(\sigma(\frac{r-1}{q} - \frac{\sigma}{b}) -\frac{p^2}{b}\right)^2 + \frac{4\sigma}{bq}\left(\sigma^2(r-1)-p^2\right)}}{\frac{2\sigma}{bq}}}$$The Jacobian is quite difficult to construct analytically. Instead, we use the symmetric difference quotient as an estimate for the derivatives:
$$f'(x) = \underset{h\rightarrow 0}{\text{ lim }}\frac{f(x+h) - f(x-h)}{2h}$$which yields a good approximation $$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}\text{ with }h = \sqrt{\epsilon_{machine}x}$$
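To see the symmetric difference quotient in action before applying it to the Jacobian, here is a quick standalone check on a function whose derivative we know exactly (the function $\sin$ is an arbitrary test case):

```python
import numpy as np

def central_diff(f, x):
    """Symmetric difference quotient with step h = sqrt(eps_machine * x)."""
    h = np.sqrt(np.finfo(float).eps * x)
    return (f(x + h) - f(x - h)) / (2*h)

# The approximation should agree closely with the exact derivative cos(1).
approx = central_diff(np.sin, 1.0)
print(approx, np.cos(1.0))
```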
def fix_org(b,r):
"""Calculate a fixed point of the original Lorenz system."""
term = np.sqrt(b*(r-1))
return np.array([term, term, r-1])
def fix_mod(b,r,sig,eps):
"""Calculate a fixed point of the modified Lorenz system."""
p = (np.sqrt(sig)*(b)**(1.5))/(eps*np.pi**2*2**(11./4.))
q = (4-b)*sig
A = sig*((r-1)/q - sig/b) - p**2/b
B = ((4*sig)/(b*q))*((r-1)*sig**2 - p**2)
X = (A - np.sqrt(A**2 + B))/((2*sig)/(b*q))
C = r*X/(X**2 + b)
D = p*X/(X**2 + sig*q)
Y = b*C
Z = X*C
U = q*D
V = X*D
return np.array([X,Y,Z,U,V])
def Jac_fix_org(b,r):
"""The Jacobian for calculating a fixed point of the original Lorenz system."""
numer = r-1
denom = 2*np.sqrt(b*(numer))
left = numer/denom
right = b/denom
return np.array([[left, right],[left, right],[0,1]])
def Jac_fix_mod(b,r,sig,eps):
"""Numerically estimate the Jacobian for calculating a fixed point of the modified Lorenz system."""
# Define function for calculating X, Y, Z, U, and V.
def P(b_,sig_,eps_):
return (np.sqrt(sig_)*b_**(1.5))/(eps_*np.pi**2*2**(11./4.))
def Q(b_,sig_):
return (4-b_)*sig_
def X(b_,r_,sig_,eps_):
p = P(b_,sig_,eps_)
q = Q(b_,sig_)
A = sig_*((r_-1)/q - sig_/b_) - p**2/b_
B = ((4*sig_)/(b_*q))*((r_-1)*sig_**2 - p**2)
return (A - np.sqrt(A**2 + B))/((2*sig_)/(b_*q))
def Y(b_,r_,sig_,eps_):
x = X(b_,r_,sig_,eps_)
return b_*r_*x/(x**2 + b_)
def Z(b_,r_,sig_,eps_):
x = X(b_,r_,sig_,eps_)
return r_*x**2/(x**2 + b_)
def U(b_,r_,sig_,eps_):
p = P(b_,sig_,eps_)
q = Q(b_,sig_)
x = X(b_,r_,sig_,eps_)
return p*q*x/(x**2 + sig_*q)
def V(b_,r_,sig_,eps_):
p = P(b_,sig_,eps_)
q = Q(b_,sig_)
x = X(b_,r_,sig_,eps_)
return p*x**2/(x**2 + sig_*q)
# Estimate the Jacobian using the symmetric difference quotient.
h_b = 1e-8*np.sqrt(b)
h_r = 1e-8*np.sqrt(r)
h_s = 1e-8*np.sqrt(sig)
h_e = 1e-8*np.sqrt(eps)
dXdb = (X(b+h_b,r,sig,eps) - X(b-h_b,r,sig,eps))/(2*h_b)
dXdr = (X(b,r+h_r,sig,eps) - X(b,r-h_r,sig,eps))/(2*h_r)
dXds = (X(b,r,sig+h_s,eps) - X(b,r,sig-h_s,eps))/(2*h_s)
dXde = (X(b,r,sig,eps+h_e) - X(b,r,sig,eps-h_e))/(2*h_e)
dYdb = (Y(b+h_b,r,sig,eps) - Y(b-h_b,r,sig,eps))/(2*h_b)
dYdr = (Y(b,r+h_r,sig,eps) - Y(b,r-h_r,sig,eps))/(2*h_r)
dYds = (Y(b,r,sig+h_s,eps) - Y(b,r,sig-h_s,eps))/(2*h_s)
dYde = (Y(b,r,sig,eps+h_e) - Y(b,r,sig,eps-h_e))/(2*h_e)
dZdb = (Z(b+h_b,r,sig,eps) - Z(b-h_b,r,sig,eps))/(2*h_b)
dZdr = (Z(b,r+h_r,sig,eps) - Z(b,r-h_r,sig,eps))/(2*h_r)
dZds = (Z(b,r,sig+h_s,eps) - Z(b,r,sig-h_s,eps))/(2*h_s)
dZde = (Z(b,r,sig,eps+h_e) - Z(b,r,sig,eps-h_e))/(2*h_e)
dUdb = (U(b+h_b,r,sig,eps) - U(b-h_b,r,sig,eps))/(2*h_b)
dUdr = (U(b,r+h_r,sig,eps) - U(b,r-h_r,sig,eps))/(2*h_r)
dUds = (U(b,r,sig+h_s,eps) - U(b,r,sig-h_s,eps))/(2*h_s)
dUde = (U(b,r,sig,eps+h_e) - U(b,r,sig,eps-h_e))/(2*h_e)
dVdb = (V(b+h_b,r,sig,eps) - V(b-h_b,r,sig,eps))/(2*h_b)
dVdr = (V(b,r+h_r,sig,eps) - V(b,r-h_r,sig,eps))/(2*h_r)
dVds = (V(b,r,sig+h_s,eps) - V(b,r,sig-h_s,eps))/(2*h_s)
dVde = (V(b,r,sig,eps+h_e) - V(b,r,sig,eps-h_e))/(2*h_e)
return np.array([[dXdb, dXdr, dXds, dXde],
[dYdb, dYdr, dYds, dYde],
[dZdb, dZdr, dZds, dZde],
[dUdb, dUdr, dUds, dUde],
[dVdb, dVdr, dVds, dVde]])
def cond_fix_org(b,r):
"""Calculate the condition number for calculating a fixed point of the original Lorenz system."""
return norm(Jac_fix_org(b,r))/(norm(fix_org(b,r))/norm(np.array([b,r])))
def cond_fix_mod(b,r,sig,eps):
"""Calculate the condition number for calculating a fixed point of the original Lorenz system."""
return norm(Jac_fix_mod(b,r,sig,eps))/(norm(fix_mod(b,r,sig,eps))/norm(np.array([b,r,sig,eps])))
# Calculate the condition number with standard system parameters.
print("Relative Condition Number for Original System:",cond_fix_org(8/3.,28))
print("Relative Condition Number for Modified System:",cond_fix_mod(8/3.,28,10,.05))
For the original system, the condition number is fantastic. For the modified system, the condition number isn't terrible, but it's much worse than that of the original system. What happened?
We may expect that the larger number of dimensions in the modified system will inflate the condition number somewhat. Also, because the Jacobian is only an approximation (and probably a pretty poor one), we may take the condition number with a grain of salt. However, it is clear that the problem of calculating the fixed points of the modified system is a little more ill-conditioned than the problem of calculating the fixed points of the original system.
The general result we can draw from these two analyses is that our modified Lorenz system is not significantly worse (or better) than the original Lorenz system in terms of stability and conditioning. This may seem like a boring result, but it is actually good to know that the modified system behaves similarly to the original system, indicating that the modified system is a legitimate variant of the original.